Apache Spark Machine Learning Blueprints by Alex Liu

Author: Alex Liu
Language: eng
Format: azw3, pdf
Publisher: Packt Publishing
Published: 2016-05-30T04:00:00+00:00


Feature extraction

In the previous chapters, we used Spark SQL and R for feature extraction. For this real-life project, we will use MLlib for feature extraction, though in practice users may combine all of the tools available.

A complete guide for MLlib feature extraction can be found at http://spark.apache.org/docs/latest/mllib-feature-extraction.html.

Here, we will use the Word2Vec method to extract features from the social media data. The following code loads a text file, parses it as an RDD of Seq[String], constructs a Word2Vec instance, and then fits a Word2VecModel on the input data. Finally, we display the top 40 synonyms of a given word (the example below queries china; for our project, the queries would be terms such as leave or bad service).

import org.apache.spark._
import org.apache.spark.rdd._
import org.apache.spark.SparkContext._
import org.apache.spark.mllib.feature.{Word2Vec, Word2VecModel}

// Load the corpus and split each line into a sequence of words
val input = sc.textFile("text8").map(line => line.split(" ").toSeq)

// Fit a Word2Vec model on the tokenized input
val word2vec = new Word2Vec()
val model = word2vec.fit(input)

// Display the top 40 synonyms of a word, with cosine similarities
val synonyms = model.findSynonyms("china", 40)
for ((synonym, cosineSimilarity) <- synonyms) {
  println(s"$synonym $cosineSimilarity")
}

// Save and load the model
model.save(sc, "myModelPath")
val sameModel = Word2VecModel.load(sc, "myModelPath")
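Word2Vec maps each individual word to a dense vector, but a classifier usually needs one fixed-length feature vector per post or review. A common approach (not shown in the snippet above) is to average the vectors of the words in a document. The following is a minimal sketch of that averaging step in plain Scala; in Spark, each Array[Double] would come from model.transform(word).toArray, and the vector dimension would match the model's vector size:

```scala
// Average a bag of word vectors into a single document feature vector.
// Assumes all vectors share the same dimension, as Word2Vec guarantees.
def averageVectors(vectors: Seq[Array[Double]]): Array[Double] = {
  require(vectors.nonEmpty, "need at least one word vector")
  val dim = vectors.head.length
  val sums = new Array[Double](dim)
  for (v <- vectors; i <- 0 until dim) sums(i) += v(i)
  sums.map(_ / vectors.length)
}

// Toy example with 3-dimensional "word vectors":
val doc = Seq(Array(1.0, 0.0, 2.0), Array(3.0, 2.0, 0.0))
val features = averageVectors(doc) // Array(2.0, 1.0, 1.0)
```

Words missing from the model's vocabulary make model.transform throw an exception, so in practice you would filter the document's words against the vocabulary before looking up their vectors.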





